We present CrossSum, a large-scale dataset comprising 1.65 million cross-lingual article-summary samples in more than 1,500 language pairs, spanning 45 languages. We build on the multilingual XL-Sum dataset and align identical articles written in different languages via cross-lingual retrieval using a language-agnostic representation model. We propose a multi-stage data sampling algorithm, fine-tune mT5, a multilingual pretrained model, with explicit cross-lingual supervision from CrossSum, and introduce a new metric for evaluating cross-lingual summaries. Results on ROUGE and our proposed metric show that our models outperform summarization+translation baselines, even when the source and target languages are linguistically distant. To the best of our knowledge, CrossSum is the largest cross-lingual summarization dataset and the first that does not rely on English as a pivot language. We are releasing the dataset, alignment and training scripts, and models to spur future research on cross-lingual abstractive summarization. The resources can be found at \url{https://github.com/csebuetnlp/crosssum}.
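As a rough illustration of the alignment step described above, the sketch below pairs articles across languages by mutual nearest-neighbor search over language-agnostic sentence embeddings. The model choice (LaBSE via sentence-transformers) and the similarity threshold are illustrative assumptions, not the paper's exact configuration:

```python
# Hedged sketch: cross-lingual article alignment via language-agnostic
# embeddings. The model (LaBSE) and the similarity threshold are
# illustrative assumptions, not the paper's exact setup.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def align_articles(src_texts, tgt_texts, threshold=0.8):
    """Return (src_idx, tgt_idx) pairs whose embeddings are mutual
    nearest neighbors with cosine similarity above `threshold`."""
    src_emb = model.encode(src_texts, normalize_embeddings=True)
    tgt_emb = model.encode(tgt_texts, normalize_embeddings=True)
    sim = src_emb @ tgt_emb.T                # cosine similarity matrix
    best_tgt = sim.argmax(axis=1)            # best target per source article
    best_src = sim.argmax(axis=0)            # best source per target article
    pairs = []
    for i, j in enumerate(best_tgt):
        # keep only mutual nearest neighbors above the threshold
        if best_src[j] == i and sim[i, j] >= threshold:
            pairs.append((i, int(j)))
    return pairs
```

Requiring the match to be mutual is a common way to suppress false pairings when many near-duplicate articles exist in the pool.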
The recent and rapid shift to digital learning driven by the pandemic, supported by the ubiquitous availability of digital tools and platforms, has made digital learning far more accessible. One of the integral and most difficult parts of scaling digital learning and teaching is the ability to assess learners' knowledge and abilities. An educator can record a lecture or create digital content that can be delivered to thousands of learners, but assessing those learners is extremely time-consuming. In this paper, we propose an artificial intelligence (AI)-based solution, VidVersityQG, for automatically generating questions from pre-recorded video lectures. The solution automatically generates different types of assessment questions (including short-answer, multiple-choice, true/false, and fill-in-the-blank questions) based on contextual and semantic information inferred from the videos. The proposed solution takes a human-centered approach, in which teachers are given the ability to modify or edit any AI-generated question. This approach encourages teachers to engage in the use and implementation of AI in education. The AI-based solution was evaluated for accuracy by experienced teaching professionals from multiple domains on 117 educational videos provided to us by our industry partner VidVersity. VidVersityQG showed promise in automatically generating high-quality questions from videos, thereby significantly reducing the time and effort educators spend on manual question creation.
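VidVersityQG itself is proprietary, but the general transcript-to-question pattern the abstract describes can be sketched with an off-the-shelf seq2seq model. The model choice and prompt below are illustrative assumptions and say nothing about the system's actual video-understanding pipeline:

```python
# Hedged sketch of transcript-based question generation; VidVersityQG's
# actual models and video pipeline are not public.
from transformers import pipeline

# Any instruction-tuned seq2seq model can stand in here (an assumption).
qg = pipeline("text2text-generation", model="google/flan-t5-base")

def generate_questions(transcript_segment: str, n: int = 3):
    prompt = (
        "Generate an exam question based on the following lecture "
        f"transcript:\n{transcript_segment}"
    )
    outputs = qg(prompt, num_return_sequences=n, do_sample=True,
                 max_new_tokens=64)
    return [o["generated_text"] for o in outputs]
```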
Finding an expert plays a crucial role in driving successful collaborations and speeding up high-quality research development and innovation. However, the rapid growth of scientific publications and digital expertise data makes identifying the right experts a challenging problem. Existing approaches for finding experts on a given topic can be categorized into information-retrieval techniques based on vector space models, document language models, and graph-based models. In this paper, we propose ExpFinder, a new ensemble model for expert finding that integrates a novel N-gram vector space model, denoted as nVSM, and a graph-based model, denoted as $\mu$CO-HITS, a proposed variant of the CO-HITS algorithm. The key of nVSM is to exploit recent inverse document frequency weighting for N-gram words, and ExpFinder incorporates nVSM into $\mu$CO-HITS to achieve expert finding. We comprehensively evaluate ExpFinder on four different datasets against six different expert-finding models. The evaluation results show that ExpFinder is a highly effective model for expert finding, significantly outperforming all compared models by 19% to 160.2%.
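As a hedged sketch of the two ingredients, the code below builds an N-gram TF-IDF relevance score and propagates it through a CO-HITS-style mutual-reinforcement iteration over an expert-document bipartite graph. The paper's actual nVSM weighting and $\mu$CO-HITS update rule are more refined than this simplification:

```python
# Hedged sketch: N-gram VSM scoring + a CO-HITS-flavored propagation over
# an expert-document graph. ExpFinder's exact nVSM weighting and the
# \mu CO-HITS update are simplified here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expert_scores(query, documents, author_matrix, lam=0.5, iters=20):
    """author_matrix: (n_experts, n_docs) binary authorship matrix."""
    vsm = TfidfVectorizer(ngram_range=(1, 3))     # N-gram vector space model
    doc_vecs = vsm.fit_transform(documents)
    q_vec = vsm.transform([query])
    doc_rel = cosine_similarity(q_vec, doc_vecs).ravel()  # query-doc relevance

    # CO-HITS-style mutual reinforcement between experts and documents,
    # seeded by the VSM relevance (a simplification of \mu CO-HITS).
    d = doc_rel.copy()
    for _ in range(iters):
        e = author_matrix @ d                     # expert score from its docs
        e /= e.sum() or 1.0
        d = lam * doc_rel + (1 - lam) * (author_matrix.T @ e)
        d /= d.sum() or 1.0
    return e
```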
An unbiased scene graph generation (SGG) algorithm, referred to as Skew Class-balanced Re-weighting (SCR), is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions at the cost of drastically dropping recall scores, i.e., losing the majority predicate performance. They have not yet correctly analyzed the trade-off between majority and minority predicate performance in the limited SGG datasets. In this paper, to alleviate this issue, the Skew Class-balanced Re-weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then re-weights the biased predicates more heavily to better trade off between the majority and minority predicates. Extensive experiments on the standard Visual Genome dataset and Open Images V4 \& V6 demonstrate the performance and generality of SCR with traditional SGG models.
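A minimal sketch of the re-weighting pattern described above is shown below: per-class weights are derived from how skewed the model's predicate predictions are relative to the ground truth, then plugged into a weighted cross-entropy. The paper's skew estimator and weighting direction are more involved; this shows only the general shape, not the authors' implementation:

```python
# Hedged sketch of skew-based class re-weighting for predicate
# classification; SCR's actual weight estimator differs.
import torch
import torch.nn.functional as F

def scr_like_loss(logits, targets, num_classes, eps=1e-8):
    """Weighted cross-entropy where classes the model over-predicts
    relative to the ground truth get weight < 1, and under-predicted
    classes get weight > 1 (one plausible instantiation)."""
    with torch.no_grad():
        pred_freq = F.one_hot(logits.argmax(1), num_classes).float().mean(0)
        true_freq = F.one_hot(targets, num_classes).float().mean(0)
        # skew: ratio of ground-truth frequency to predicted frequency
        weights = (true_freq + eps) / (pred_freq + eps)
        weights = weights / weights.mean()
    return F.cross_entropy(logits, targets, weight=weights)
```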
In the field of cross-modal retrieval, single-encoder models tend to perform better than dual-encoder models, but they suffer from high latency and low throughput. In this paper, we present a dual-encoder model called BagFormer that utilizes a cross-modal interaction mechanism to improve recall performance without sacrificing latency or throughput. BagFormer achieves this through bag-wise interactions, which allow text to be transformed to a more appropriate granularity and entity knowledge to be incorporated into the model. Our experiments demonstrate that BagFormer achieves results comparable to state-of-the-art single-encoder models on cross-modal retrieval tasks, while also offering efficient training and inference with 20.72 times lower latency and 25.74 times higher throughput.
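The bag-wise interaction can be pictured as a late-interaction score between groups ("bags") of text embeddings and image patch embeddings. The sketch below uses a MaxSim-style aggregation as a stand-in; BagFormer's actual interaction mechanism may differ:

```python
# Hedged sketch of a bag-wise late-interaction relevance score;
# BagFormer's actual interaction mechanism may differ.
import torch
import torch.nn.functional as F

def bag_interaction_score(text_bags, image_patches):
    """text_bags:     (n_bags, dim)    pooled embeddings of text spans
    image_patches: (n_patches, dim) patch embeddings from the image encoder.
    Returns a scalar relevance score."""
    text_bags = F.normalize(text_bags, dim=-1)
    image_patches = F.normalize(image_patches, dim=-1)
    sim = text_bags @ image_patches.T       # (n_bags, n_patches)
    # each text bag attends to its best-matching image patch (MaxSim),
    # then scores are averaged over bags
    return sim.max(dim=1).values.mean()
```

Because both sides are still encoded independently, such a score keeps the indexing-friendly efficiency of a dual encoder while recovering some of the fine-grained matching of single-encoder models.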
Deep learning has been widely used for protein engineering. However, it is limited by the lack of sufficient experimental data for training an accurate model to predict the functional fitness of high-order mutants. Here, we develop SESNet, a supervised deep-learning model that predicts the fitness of protein mutants by leveraging both sequence and structure information and exploiting an attention mechanism. Our model integrates the local evolutionary context from homologous sequences, the global evolutionary context encoding rich semantics from the universal protein sequence space, and structure information accounting for the microenvironment around each residue in a protein. We show that SESNet outperforms state-of-the-art models in predicting the sequence-function relationship on 26 deep mutational scanning datasets. More importantly, we propose a data augmentation strategy that leverages data from unsupervised models to pre-train our model. After pre-training, our model achieves strikingly high accuracy in predicting the fitness of protein mutants, especially for higher-order variants (>4 mutation sites), when fine-tuned using only a small number of experimental mutation data points (<50). The proposed strategy is of great practical value, as the required experimental effort, i.e., producing a few tens of experimental mutation data points for a given protein, is generally affordable for an ordinary biochemical group and can be applied to almost any protein.
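A hedged sketch of the fusion idea: per-residue local (homolog-derived), global (protein language model), and structural (microenvironment) features are concatenated, mixed by self-attention over residues, and pooled into a fitness prediction. The feature dimensions and architecture below are illustrative assumptions, not the paper's configuration:

```python
# Hedged sketch of sequence+structure feature fusion for fitness
# prediction; dimensions and architecture are illustrative only.
import torch
import torch.nn as nn

class FitnessFusion(nn.Module):
    def __init__(self, d_local=64, d_global=1280, d_struct=32, d_model=256):
        super().__init__()
        self.proj = nn.Linear(d_local + d_global + d_struct, d_model)
        self.attn = nn.TransformerEncoderLayer(d_model, nhead=8,
                                               batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, local_f, global_f, struct_f):
        # each input: (batch, seq_len, d_*) per-residue features
        x = torch.cat([local_f, global_f, struct_f], dim=-1)
        x = self.attn(self.proj(x))         # attention over residues
        return self.head(x.mean(dim=1))     # pooled -> scalar fitness score
```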
Three-dimensional (3D) ultrasound imaging has been applied to scoliosis assessment, but current assessment methods use only the coronal projection image and cannot illustrate 3D deformity and vertebral rotation. Vertebra detection is essential for revealing 3D spine information, but the detection task is challenging due to complex data and limited annotations. We propose VertMatch, a two-step framework that detects vertebral structures in 3D ultrasound volumes by utilizing unlabeled data in a semi-supervised manner. The first step detects the possible positions of structures on transverse slices globally, and local patches are then cropped based on the detected positions. The second step distinguishes whether the patches contain real vertebral structures and screens the positions predicted in the first step. VertMatch develops three novel components for semi-supervised learning. For position detection in the first step: (1) an anatomical prior is used to screen pseudo labels generated by the confidence-threshold method; (2) multi-slice consistency is used to exploit more unlabeled data by inputting multiple adjacent slices. For patch identification in the second step: (3) the categories are rebalanced in each batch to address the class-imbalance problem. Experimental results demonstrate that VertMatch detects vertebrae accurately in ultrasound volumes and outperforms state-of-the-art methods. VertMatch has also been validated in a clinical application on forty ultrasound scans, and it is a promising approach for 3D assessment of scoliosis.
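The pseudo-label screening in the first step can be sketched as: keep a candidate detection only if the teacher's confidence clears a threshold and the position is anatomically plausible. The concrete prior below (a plausible-spacing check along the spine axis) is an illustrative assumption; VertMatch's actual prior and multi-slice consistency checks are richer:

```python
# Hedged sketch of confidence + anatomical-prior screening of pseudo
# labels; VertMatch's actual prior and consistency checks are richer.
def screen_pseudo_labels(detections, conf_threshold=0.9, max_gap_mm=60.0):
    """detections: list of (z_position_mm, confidence), one per candidate
    vertebral structure found on the transverse slices."""
    kept = [d for d in detections if d[1] >= conf_threshold]
    kept.sort(key=lambda d: d[0])
    screened = []
    for z, conf in kept:
        # anatomical prior (illustrative): consecutive vertebrae should be
        # spaced within a plausible distance along the spine axis
        if screened and (z - screened[-1][0]) > max_gap_mm:
            continue
        screened.append((z, conf))
    return screened
```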
Image captioning is one of the straightforward tasks that can take advantage of large-scale web-crawled data, which provides rich knowledge about the visual world for a captioning model. However, since web-crawled data contains image-text pairs that are aligned at different levels, the inherent noise (e.g., misaligned pairs) makes it difficult to learn a precise captioning model. While a filtering strategy can effectively remove noisy data, it also reduces the learnable knowledge and sometimes creates a new problem of data deficiency. To get the best of both worlds, we propose a noise-aware learning framework that learns rich knowledge from the entire web-crawled dataset while being less affected by the noise. This is achieved with the proposed quality-controllable model, which is trained using the alignment levels of the image-text pairs as an additional control signal. The alignment-conditioned training allows the model to generate well-aligned, high-quality captions simply by setting the control signal to the desired alignment level at inference time. Through in-depth analysis, we show that our controllable captioning model is effective in handling noise. In addition, with two tasks, zero-shot captioning and text-to-image retrieval using generated captions (i.e., self-retrieval), we demonstrate that our model can produce high-quality captions in terms of descriptiveness and distinctiveness. Code is available at \url{https://github.com/kakaobrain/noc}.
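The control-signal idea can be sketched as bucketing each pair's image-text similarity into a discrete alignment level that is prepended to the caption during training; at inference, the highest level is requested. Using CLIP-style similarity as the alignment signal and the `<align_k>` token format are assumptions for illustration:

```python
# Hedged sketch of alignment-conditioned caption data preparation;
# using CLIP-style similarity as the alignment signal is an assumption.
import torch
import torch.nn.functional as F

def alignment_level(image_emb, text_emb, num_levels=5):
    """Bucket cosine similarity into a discrete control level."""
    sim = F.cosine_similarity(image_emb, text_emb, dim=-1)
    # map similarity in [-1, 1] to an integer level in [0, num_levels-1]
    return ((sim + 1) / 2 * num_levels).long().clamp(0, num_levels - 1)

def add_control_token(caption: str, level: int) -> str:
    # during training: prepend the observed level; at inference time,
    # prepend the highest level to request well-aligned captions
    return f"<align_{level}> {caption}"
```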
Automatic image colorization is a particularly challenging problem. Due to the ill-posed nature of the problem and its multi-modal uncertainty, directly training a deep neural network usually leads to incorrect semantic colors and low color richness. Existing transformer-based methods can deliver better results but depend heavily on hand-crafted, dataset-level empirical distribution priors. In this work, we propose DDColor, a new end-to-end method with dual decoders for image colorization. More specifically, we design a multi-scale image decoder and a transformer-based color decoder. The former restores the spatial resolution of the image, while the latter establishes the correlation between semantic representations and color queries via cross-attention. The two decoders work together to learn semantic-aware color embeddings by leveraging multi-scale visual features. With the help of these two decoders, our method produces semantically consistent and visually plausible colorization results without any additional priors. In addition, a simple but effective colorfulness loss is introduced to further improve the color richness of the generated results. Our extensive experiments demonstrate that the proposed DDColor achieves significantly superior performance to existing state-of-the-art works, both quantitatively and qualitatively. Code will be made publicly available.
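A colorfulness loss of the kind mentioned above can be sketched with the classic Hasler-Suesstrunk colorfulness statistic, penalizing generated images that are less colorful. Whether DDColor uses exactly this statistic is an assumption:

```python
# Hedged sketch of a colorfulness loss based on the Hasler-Suesstrunk
# colorfulness statistic; DDColor's exact formulation may differ.
import torch

def colorfulness(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: (batch, 3, H, W) in [0, 1]. Returns per-image colorfulness."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    rg = r - g                               # red-green opponent channel
    yb = 0.5 * (r + g) - b                   # yellow-blue opponent channel
    std = torch.sqrt(rg.var(dim=(1, 2)) + yb.var(dim=(1, 2)))
    mean = torch.sqrt(rg.mean(dim=(1, 2)) ** 2 + yb.mean(dim=(1, 2)) ** 2)
    return std + 0.3 * mean

def colorfulness_loss(generated: torch.Tensor) -> torch.Tensor:
    # encourage richer colors by penalizing low colorfulness
    return -colorfulness(generated).mean()
```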
Obtaining ground-truth data in medical imaging is difficult because it requires a great deal of annotation time from experts in the field. Moreover, a model trained with supervised learning detects only the cases included in the labels; in real practice, we also want to remain open to possibilities beyond the named cases when examining medical images. As a solution, there is an emerging need for anomaly detection that can detect and localize abnormalities by learning normal characteristics from normal images alone. With medical image data, we can design either 2D or 3D self-supervised networks for the anomaly detection task. Although 3D networks, which learn the 3D structure of the human body, show good performance in 3D medical image anomaly detection, they cannot be stacked into deeper layers due to memory constraints. While 2D networks have an advantage in feature detection, they lack 3D context information. In this paper, we develop a method for combining the strength of the 3D network and the strength of the 2D network through joint embedding. We also propose a self-supervised pretext task that enables the networks to learn efficiently. Through experiments, we show that the proposed method achieves better performance on both classification and segmentation tasks than the state-of-the-art method.
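A hedged sketch of the joint-embedding idea: a 2D encoder over slices and a 3D encoder over the volume are projected into a shared space and pulled together with a contrastive-style objective. The toy encoders and loss below are illustrative assumptions, not the paper's pretext task:

```python
# Hedged sketch of joint 2D-3D embedding for self-supervised anomaly
# detection pre-training; the paper's pretext task and losses differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # lightweight stand-in encoders (real ones would be much deeper)
        self.enc2d = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(16, dim))
        self.enc3d = nn.Sequential(nn.Conv3d(1, 16, 3, 2, 1), nn.ReLU(),
                                   nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                   nn.Linear(16, dim))

    def forward(self, slices2d, volume3d):
        z2d = F.normalize(self.enc2d(slices2d), dim=-1)
        z3d = F.normalize(self.enc3d(volume3d), dim=-1)
        return z2d, z3d

def joint_loss(z2d, z3d, temperature=0.1):
    # align each 2D slice embedding with its parent 3D volume embedding
    logits = z2d @ z3d.T / temperature
    targets = torch.arange(z2d.size(0), device=z2d.device)
    return F.cross_entropy(logits, targets)
```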